Log-concave MLE

Nonparametric inference under shape constraints: past, present and future

Samworth, Richard J.

arXiv.org Machine Learning

We survey the field of nonparametric inference under shape constraints, providing a historical overview and a perspective on its current state. An outlook and some open problems offer thoughts on future directions.

1 Introduction. Traditionally, we think of statistical methods as divided into parametric approaches, which can be restrictive but where estimation is typically straightforward (e.g. via maximum likelihood), and nonparametric methods, which are more flexible but often require careful choices of tuning parameters. Nonparametric inference under shape constraints sits somewhere in the middle, seeking in some ways the best of both worlds. The origins of the field are often traced to Grenander (1956), who proved that there exists a unique maximum likelihood estimator (MLE) of a decreasing density on the non-negative half-line, and characterised it explicitly.
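Grenander's explicit characterisation is computable: the MLE is the left derivative of the least concave majorant of the empirical distribution function, which the pool-adjacent-violators algorithm (PAVA) delivers in linear time after sorting. A minimal sketch, assuming distinct positive observations; `grenander` is our own illustrative function, not from any library:

```python
import numpy as np

def grenander(x):
    """Grenander MLE of a decreasing density on [0, inf).

    Computes the left derivative of the least concave majorant of the
    empirical CDF by running PAVA (for a decreasing fit) on the raw
    slopes between successive order statistics. Assumes distinct,
    strictly positive observations.
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    knots = np.concatenate(([0.0], x))
    widths = np.diff(knots)            # gaps between order statistics
    slopes = (1.0 / n) / widths        # raw slopes of the empirical CDF
    # PAVA: pool adjacent blocks whenever a slope increases, replacing
    # them by their width-weighted average. blocks = [value, weight, count]
    blocks = []
    for s, w in zip(slopes, widths):
        blocks.append([s, w, 1])
        while len(blocks) > 1 and blocks[-2][0] < blocks[-1][0]:
            v1, w1, c1 = blocks.pop()
            v0, w0, c0 = blocks.pop()
            blocks.append([(v0 * w0 + v1 * w1) / (w0 + w1), w0 + w1, c0 + c1])
    # expand pooled blocks back to one density value per gap
    dens = np.concatenate([np.full(c, v) for v, w, c in blocks])
    return knots, dens  # density equals dens[i] on (knots[i], knots[i+1]]
```

The returned step density is nonincreasing by construction and integrates to one, since pooling preserves the total mass `sum(slopes * widths) = 1`.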


Universal Inference Meets Random Projections: A Scalable Test for Log-concavity

Dunn, Robin, Gangrade, Aditya, Wasserman, Larry, Ramdas, Aaditya

arXiv.org Artificial Intelligence

Shape constraints yield flexible middle grounds between fully nonparametric and fully parametric approaches to modeling distributions of data. The specific assumption of log-concavity is motivated by applications across economics, survival modeling, and reliability theory. Until recently, however, no valid tests existed for whether the underlying density of given data is log-concave. The universal inference methodology provides such a test: it relies only on maximum likelihood estimation (MLE), and efficient methods already exist for computing the log-concave MLE. This yields the first test of log-concavity that is provably valid in finite samples in any dimension, for which we also establish asymptotic consistency results. Empirically, we find that the highest power is obtained by using random projections to convert the d-dimensional testing problem into many one-dimensional problems, leading to a simple procedure that is statistically and computationally efficient.
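The universal (split) likelihood-ratio test described above can be sketched as follows: split the sample, fit an unconstrained density estimate on one half, fit the null-constrained MLE on the other, and reject when the likelihood ratio exceeds 1/alpha, which gives finite-sample validity by Markov's inequality when the denominator is the true constrained MLE. In this illustrative skeleton (all names are ours), a kernel density estimate plays the numerator; the denominator is a user-supplied null fit, since the actual log-concave MLE needs a dedicated solver not sketched here:

```python
import numpy as np
from scipy.stats import gaussian_kde

def split_lrt(x, null_mle_pdf, alpha=0.05, rng=None):
    """Skeleton of the universal (split) likelihood-ratio test.

    Randomly splits the sample into halves D0 and D1, fits a Gaussian
    KDE on D1 (numerator), fits a null-constrained density on D0 via
    the supplied `null_mle_pdf` (denominator), and rejects when the
    log likelihood ratio evaluated on D0 exceeds log(1/alpha).
    """
    rng = np.random.default_rng(rng)
    x = rng.permutation(np.asarray(x, dtype=float))
    d0, d1 = x[: x.size // 2], x[x.size // 2 :]
    p1 = gaussian_kde(d1)              # unconstrained estimate, fit on D1
    p0 = null_mle_pdf(d0)              # null-constrained fit on D0
    log_t = float(np.sum(np.log(p1(d0)) - np.log(p0(d0))))
    return log_t, log_t >= np.log(1.0 / alpha)
```

As a stand-in null fit one could pass a Gaussian MLE, e.g. `lambda train: __import__("scipy.stats", fromlist=["norm"]).norm(train.mean(), train.std()).pdf`; for the actual log-concavity test the denominator must be the log-concave MLE on D0, and for multivariate data the paper's highest-power variant first projects onto random directions and runs this one-dimensional test on each projection.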